119 research outputs found

    Tuning the Senses: How the Pupil Shapes Vision at the Earliest Stage

    The pupil responds reflexively to changes in brightness and focal distance to maintain the smallest pupil (and thus the highest visual acuity) that still allows sufficient light to reach the retina. The pupil also responds to a wide variety of cognitive processes, but the functions of these cognitive responses are still poorly understood. In this review, I propose that cognitive pupil responses, like their reflexive counterparts, serve to optimize vision. Specifically, an emphasis on central vision over peripheral vision results in pupil constriction, and this likely reflects the fact that central vision benefits most from the increased visual acuity provided by small pupils. Furthermore, an intention to act on a bright stimulus results in preparatory pupil constriction, which allows the pupil to respond quickly when that bright stimulus is subsequently brought into view. More generally, cognitively driven pupil responses are likely a form of sensory tuning: a subtle adjustment of the eyes to optimize their properties for the current situation and the immediate future.

    It's all about the transient: Intra-saccadic onset stimuli do not capture attention

    An abrupt onset stimulus was presented while the participants' eyes were in motion. Because of saccadic suppression, participants did not perceive the visual transient that normally accompanies the sudden appearance of a stimulus. In contrast to the typical finding that the presentation of an abrupt onset captures attention and interferes with the participants' responses, we found that an intra-saccadic abrupt onset does not capture attention: It has no effect beyond that of increasing the set size of the search array by one item. This finding favours the local transient account of attentional capture over the novel object hypothesis.

    The effect of pupil size and peripheral brightness on detection and discrimination performance

    It is easier to read dark text on a bright background (positive polarity) than to read bright text on a dark background (negative polarity). This positive-polarity advantage is often linked to pupil size: A bright background induces small pupils, which in turn increases visual acuity. Here we report that pupil size, when manipulated through peripheral brightness, has qualitatively different effects on discrimination of fine stimuli in central vision and detection of faint stimuli in peripheral vision. Small pupils are associated with improved discrimination performance, consistent with the positive-polarity advantage, but only for very small stimuli that are at the threshold of visual acuity. In contrast, large pupils are associated with improved detection performance. These results are likely due to two pupil-size-related factors: Small pupils increase visual acuity, which improves discrimination of fine stimuli; and large pupils increase light influx, which improves detection of faint stimuli. Light scatter is likely also a contributing factor: When a display is bright, light scatter creates a diffuse veil of retinal illumination that reduces perceived image contrast, thus impairing detection performance. We further found that pupil size was larger during the detection task than during the discrimination task, even though both tasks were equally difficult and similar in terms of visual input; this suggests that the pupil may automatically assume an optimal size for the current task. Our results may explain why pupils dilate in response to arousal: This may reflect an increased emphasis on detection of unpredictable danger, which is crucially important in many situations that are characterized by high levels of arousal. Finally, we discuss the implications of our results for the ergonomics of display design.

    OpenSesame: An open-source, graphical experiment builder for the social sciences

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.
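
    To give a sense of the Python scripting mentioned above, here is a minimal sketch of an inline script as it might appear in a recent OpenSesame version: it draws a text message, shows it, and collects a key press. The Canvas, Keyboard, and var objects follow the documented 3.x scripting API, but treat this as an illustration rather than authoritative usage; exact names and behavior may differ between versions.

        # Minimal inline_script sketch (OpenSesame 3.x Python API; illustrative only).
        # Draw a message, show it, and wait up to 2000 ms for a key press.
        my_canvas = Canvas()                      # off-screen drawing surface
        my_canvas.text('Press any key to respond')
        my_canvas.show()                          # flip the canvas onto the display
        my_keyboard = Keyboard(timeout=2000)      # response collection with a timeout
        key, end_time = my_keyboard.get_key()     # key is None if the timeout elapsed
        var.response = key                        # store the response as an experimental variable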

    The World (of Warcraft) through the eyes of an expert

    Recent studies have found negative correlations between pupil size and the tendency to look at salient locations (e.g., Mathôt et al., 2015). One hypothesis is that this negative correlation reflects the mental effort that participants invest in the task, which in turn leads to pupil dilation. Here we present an exploratory study on the effect of expertise on eye-movement behavior. Because no standard tool is available to evaluate WoW players' expertise, we built an off-game questionnaire testing players' knowledge of WoW and skills acquired through completed raids, highest-rated battlegrounds, Skill Points, etc. Experts (N = 4) and novices (N = 4) in the massively multiplayer online role-playing game World of Warcraft (WoW) viewed 24 video segments from the game that differed in content (i.e., informative locations) and visual complexity (i.e., salient locations). Consistent with previous studies, we found a negative correlation between pupil size and the tendency to look at salient locations (experts, r = −.17, p < .0001; novices, r = −.09, p < .0001). This correlation has been interpreted in terms of mental effort: People are inherently biased to look at salient locations (sharp corners, bright lights, etc.), but can overcome this bias if they invest sufficient mental effort. Crucially, this correlation was stronger for expert WoW players than for novices (Z = −3.3, p = .0011). This suggests that experts have learned to improve control over their eye-movement behavior by guiding their eyes toward informative, but potentially low-salient, areas of the screen. These findings may contribute to our understanding of what makes an expert an expert.
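
    As a rough illustration of the central analysis (not the authors' actual code), the correlation between pupil size and the saliency of fixated locations could be computed per fixation along the following lines; the data below are synthetic placeholders.

        # Illustrative sketch with synthetic placeholder data: in the real analysis,
        # each entry would be one fixation (pupil size and saliency at the fixated location).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        fixation_saliency = rng.normal(size=500)                        # saliency at each fixated location
        pupil_size = -0.15 * fixation_saliency + rng.normal(size=500)   # pupil size during each fixation

        r, p = stats.pearsonr(pupil_size, fixation_saliency)
        print(f'r = {r:.2f}, p = {p:.4f}')                              # a negative r mirrors the reported effect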

    The role of object affordances and center of gravity in eye movements toward isolated daily-life objects

    The purpose of the current study was to investigate to what extent low-level versus high-level effects determine where the eyes land on isolated daily-life objects. We operationalized low-level effects as eye movements toward an object's center of gravity (CoG) or the absolute object center (OC), and high-level effects as visuomotor priming by object affordances. In two experiments, we asked participants to make saccades toward peripherally presented photographs of graspable objects (e.g., a hammer) and to either categorize them (Experiment 1) or to discriminate them from visually matched nonobjects (Experiment 2). Objects were rotated such that their graspable part (e.g., the hammer's handle) pointed toward either the left or the right, whereas their action-performing part (e.g., the hammer's head) pointed toward the other side. We found that early-triggered saccades were biased neither toward the object's graspable part nor toward its action-performing part. Instead, participants' eyes landed near the CoG/OC. Only longer-latency initial saccades and refixations were subject to high-level influences, being significantly biased toward the object's action-performing part. Our comparison with eye movements toward visually matched nonobjects revealed that this late bias was not merely the consequence of a low-level effect of shape, texture, asymmetry, or saliency. Instead, we interpret it as a higher-level, object-based affordance effect that requires time, and to some extent also foveation, to build up and to overcome default saccadic-programming mechanisms.
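
    For concreteness, here is a minimal sketch (not taken from the study) of how the two low-level reference points could be derived from a binary object mask: the center of gravity as the centroid of object pixels, and the absolute object center as the center of the bounding box.

        import numpy as np

        def cog_and_oc(mask):
            # mask: boolean image array, True where the object is.
            ys, xs = np.nonzero(mask)
            cog = (xs.mean(), ys.mean())                                   # center of gravity: pixel centroid
            oc = ((xs.min() + xs.max()) / 2, (ys.min() + ys.max()) / 2)    # object center: bounding-box center
            return cog, oc

        # Toy example: an L-shaped object, for which CoG and OC clearly differ.
        mask = np.zeros((10, 10), dtype=bool)
        mask[2:8, 2:4] = True   # vertical bar of the L
        mask[6:8, 2:8] = True   # horizontal bar of the L
        print(cog_and_oc(mask))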

    Coordination effort in joint action is reflected in pupil size

    Humans often perform visual tasks together, and when doing so, they tend to devise division-of-labor strategies to share the load. Implementing such strategies, however, is effortful, as co-actors need to coordinate their actions. We tested whether pupil size (a physiological correlate of mental effort) can detect such coordination effort in a multiple object tracking (MOT) task. Participants performed the MOT task jointly with a computer partner and either devised a division-of-labor strategy themselves (main experiment) or the division of labor was predetermined (control experiment). We observed that pupil size increased relative to performing the MOT task alone in the main experiment, but not in the control experiment. These findings suggest that pupil size can detect a rise in coordination effort, extending the view that pupil size indexes mental effort across a wide range of cognitively demanding tasks.

    A simple way to estimate similarity between pairs of eye movement sequences

    We propose a novel algorithm to estimate the similarity between a pair of eye movement sequences. The proposed algorithm relies on a straightforward geometric representation of eye movement data. The algorithm is considerably simpler to implement and apply than existing similarity measures, and is particularly suited for exploratory analyses. To validate the algorithm, we conducted a benchmark experiment using realistic artificial eye movement data. Based on similarity estimates obtained from the proposed algorithm, we defined two clusters in an unlabelled set of eye movement sequences. As a measure of the algorithm's sensitivity, we quantified the extent to which these data-driven clusters matched two pre-defined groups (i.e., the 'real' clusters). The same analysis was performed using two other, commonly used similarity measures. The results show that the proposed algorithm is a viable similarity measure.
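
    The published algorithm is described in the paper itself; as an illustration of the kind of straightforward geometric approach it refers to, the sketch below maps each fixation in one sequence to its nearest neighbour in the other and averages the resulting distances. The details (which dimensions are included and how they are normalized) are assumptions here, not a reproduction of the original method.

        import numpy as np

        def scanpath_distance(a, b):
            # a, b: arrays of shape (n_fixations, n_dims), e.g. columns x, y (and
            # optionally duration), ideally normalized to comparable scales.
            # Each point is mapped to the closest point in the other sequence;
            # the mean of these minimum distances (in both directions) is returned.
            # Lower values mean more similar sequences.
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
            return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2

        seq1 = np.array([[100, 100], [300, 200], [500, 400]], dtype=float)
        seq2 = np.array([[110,  90], [480, 410]], dtype=float)
        print(scanpath_distance(seq1, seq2))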

    Mantra: An open method for object and movement tracking

    Mantra is a free and open-source software package for object tracking. It is specifically designed to be used as a tool for response collection in psychological experiments and requires only a computer and a camera (a webcam is sufficient). Mantra is compatible with widely used software for creating psychological experiments. In Experiments 1 and 2, we validated the spatial and temporal precision of Mantra in realistic experimental settings. In Experiments 3 and 4, we validated the spatial precision and accuracy of Mantra more rigorously by tracking a computer-controlled physical stimulus and stimuli presented on a computer screen.
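
    Mantra's own implementation is documented with the package; purely as a generic illustration of the underlying idea (locating a colored marker in each webcam frame), one could do something like the following with OpenCV. The HSV range and frame count below are arbitrary assumptions, not values used by Mantra.

        # Generic color-blob tracking sketch (illustration only, not Mantra's code).
        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                        # default webcam
        lower, upper = (40, 80, 80), (80, 255, 255)      # rough HSV range for a green marker (assumption)
        for _ in range(100):                             # track for ~100 frames
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, lower, upper)        # binary mask of marker-colored pixels
            ys, xs = np.nonzero(mask)
            if len(xs):
                print(f'marker at ({xs.mean():.0f}, {ys.mean():.0f})')  # centroid in pixel coordinates
        cap.release()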